multi-label classification problem
Coherent Hierarchical Multi-Label Classification Networks
Hierarchical multi-label classification (HMC) is a challenging classification task extending standard multi-label classification problems by imposing a hierarchy constraint on the classes. In this paper, we propose C-HMCNN(h), a novel approach for HMC problems, which, given a network h for the underlying multi-label classification problem, exploits the hierarchy information in order to produce predictions coherent with the constraint and improve performance. We conduct an extensive experimental analysis showing the superior performance of C-HMCNN(h) when compared to state-of-the-art models.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > Canada (0.04)
- Europe > United Kingdom > Wales (0.04)
- (2 more...)
- North America > United States > Texas (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Florida > Broward County > Fort Lauderdale (0.04)
- (3 more...)
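The coherence idea above can be sketched concretely: if a class hierarchy is given, a prediction is coherent when no ancestor scores below any of its descendants. A minimal post-processing sketch (with an invented toy hierarchy and invented raw scores; the actual C-HMCNN(h) construction builds this into training):

```python
# Toy sketch: make raw per-class scores coherent with a class hierarchy by
# replacing each class's score with the max over its subtree, so an ancestor
# never scores below any of its descendants.

def coherent_scores(scores, children):
    """scores: dict class -> raw score in [0, 1];
    children: dict class -> list of direct subclasses."""
    def subtree_max(c):
        return max([scores[c]] + [subtree_max(k) for k in children.get(c, [])])
    return {c: subtree_max(c) for c in scores}

children = {"Europe": ["United Kingdom"], "United Kingdom": ["England", "Wales"]}
raw = {"Europe": 0.10, "United Kingdom": 0.05, "England": 0.60, "Wales": 0.02}
coh = coherent_scores(raw, children)
# "United Kingdom" and "Europe" are lifted to England's 0.60, so thresholding
# can no longer predict a subclass without its ancestors
```

Recomputing the subtree maximum per class is quadratic in the worst case; a single bottom-up pass over the tree would avoid that, but the sketch keeps the idea visible.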
Feature-aware Label Space Dimension Reduction for Multi-label Classification
Label space dimension reduction (LSDR) is an efficient and effective paradigm for multi-label classification with many classes. Existing approaches to LSDR, such as compressive sensing and principal label space transformation, exploit only the label part of the dataset, not the feature part. In this paper, we propose a novel approach to LSDR that considers both the label and the feature parts. The approach, called conditional principal label space transformation, is based on minimizing an upper bound of the popular Hamming loss. The minimization step of the approach can be carried out efficiently by a simple use of singular value decomposition. In addition, the approach can be extended to a kernelized version that allows the use of sophisticated feature combinations to assist LSDR. The experimental results verify that the proposed approach is more effective than existing LSDR approaches across many real-world datasets.
- Asia > Taiwan (0.05)
- Asia > Middle East > Jordan (0.04)
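The SVD step mentioned in the abstract can be illustrated in its simplest, feature-unaware form (plain principal label space transformation): project the centered label matrix onto its top-k principal directions, then decode by back-projecting and rounding. CPLST additionally couples this step with the features; the toy label matrix here is invented.

```python
import numpy as np

# PLST-style sketch: compress an n x L binary label matrix to k dimensions
# via SVD, then decode by back-projecting and rounding at 0.5.
Y = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 1, 1]], dtype=float)
mean = Y.mean(axis=0)
Yc = Y - mean                       # center the labels
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
k = 2
P = Vt[:k]                          # k x L projection matrix
Z = Yc @ P.T                        # reduced k-dim codes (the regression targets)
Y_hat = (Z @ P + mean > 0.5).astype(int)   # decode: back-project and round
```

In a full pipeline one would train regressors from the features to the codes `Z`; since this toy label matrix has rank 2, the k = 2 round-trip recovers `Y` exactly.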
On the Optimality of Classifier Chain for Multi-label Classification
To capture the interdependencies between labels in multi-label classification problems, classifier chain (CC) takes the multiple labels of each instance into account under a deterministic high-order Markov chain model. Since its performance is sensitive to the choice of label order, the key issue is how to determine the optimal label order for CC. In this work, we first generalize the CC model over a random label order. Then, we present a theoretical analysis of the generalization error for the proposed generalized model. Based on our results, we propose a dynamic programming based classifier chain (CC-DP) algorithm to search for the globally optimal label order for CC and a greedy classifier chain (CC-Greedy) algorithm to find a locally optimal CC. Comprehensive experiments on a number of real-world multi-label datasets from various domains demonstrate that our proposed CC-DP algorithm outperforms state-of-the-art approaches and that the CC-Greedy algorithm achieves prediction performance comparable to CC-DP.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Texas (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (4 more...)
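The chaining mechanism itself is easy to sketch: label j is predicted from the features plus the labels (or predictions) for labels 1..j-1, in some fixed order. CC-DP and CC-Greedy search over that order; the sketch below simply uses order 0, 1, 2 with a plain gradient-descent logistic regression on invented data.

```python
import numpy as np

def fit_logreg(X, y, lr=0.5, steps=2000):
    # Plain batch gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fit_chain(X, Y):
    Xa = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
    ws = []
    for j in range(Y.shape[1]):
        ws.append(fit_logreg(Xa, Y[:, j]))
        Xa = np.hstack([Xa, Y[:, [j]]])         # feed the true label forward
    return ws

def predict_chain(X, ws):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    preds = []
    for w in ws:
        p = (1.0 / (1.0 + np.exp(-Xa @ w)) > 0.5).astype(float)
        preds.append(p)
        Xa = np.hstack([Xa, p[:, None]])        # feed the prediction forward
    return np.stack(preds, axis=1)

X = np.array([[0.0], [1.0], [0.0], [1.0]])
Y = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1]], dtype=float)
P = predict_chain(X, fit_chain(X, Y))
```

Training uses the true previous labels while prediction must use the chain's own earlier outputs, which is exactly why a bad label order propagates errors down the chain.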
Bag of Tricks for Long-Tailed Multi-Label Classification on Chest X-Rays
Hong, Feng, Dai, Tianjie, Yao, Jiangchao, Zhang, Ya, Wang, Yanfeng
Clinical classification of chest radiography is particularly challenging for standard machine learning algorithms due to its inherent long-tailed and multi-label nature. However, few attempts account for the coupled challenges posed by class imbalance and label co-occurrence, which limits their value for boosting diagnosis on chest X-rays (CXRs) in real-world scenarios. Besides, with the prevalence of pretraining techniques, how to incorporate these new paradigms into the current framework lacks systematic study. This technical report presents a brief description of our solution in the ICCV CVAMD 2023 CXR-LT Competition. We empirically explored the effectiveness of several advanced designs for CXR diagnosis, covering data augmentation, feature extractors, classifier design, loss function reweighting, exogenous data replenishment, etc. In addition, we improve performance through simple test-time data augmentation and ensembling. Our framework finally achieves 0.349 mAP on the competition test set, ranking in the top five.
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
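The test-time augmentation and ensembling step mentioned above reduces to averaging per-label probabilities over augmented views and over models. A minimal sketch, where the two constant "models" and the horizontal-flip "augmentation" are invented stand-ins:

```python
import numpy as np

def tta_ensemble(image, models, n_labels):
    # Average multi-label probabilities over views and over models.
    views = [image, image[:, ::-1]]            # identity + horizontal flip
    probs = np.zeros(n_labels)
    for model in models:
        for v in views:
            probs += model(v)
    return probs / (len(models) * len(views))

rng = np.random.default_rng(0)
image = rng.random((4, 4))
models = [lambda v: np.array([0.9, 0.2]),      # constant stand-in "models"
          lambda v: np.array([0.7, 0.4])]
p = tta_ensemble(image, models, n_labels=2)
# p is approximately [0.8, 0.3]: the mean over 2 models x 2 views
```

Averaging probabilities (rather than thresholded predictions) keeps the per-label scores usable for mAP-style ranking metrics like the one used in the competition.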
r-softmax: Generalized Softmax with Controllable Sparsity Rate
Bałazy, Klaudia, Struski, Łukasz, Śmieja, Marek, Tabor, Jacek
Nowadays, artificial neural network models achieve remarkable results in many disciplines. Functions mapping the representation provided by the model to a probability distribution are an inseparable aspect of deep learning solutions. Although softmax is a commonly accepted probability mapping function in the machine learning community, it cannot return sparse outputs and always spreads positive probability to all positions. In this paper, we propose r-softmax, a modification of softmax that outputs sparse probability distributions with a controllable sparsity rate. In contrast to existing sparse probability mapping functions, we provide an intuitive mechanism for controlling the output sparsity level. We show on several multi-label datasets that r-softmax outperforms other sparse alternatives to softmax and is highly competitive with the original softmax. We also apply r-softmax to the self-attention module of a pre-trained transformer language model and demonstrate that it leads to improved performance when fine-tuning the model on different natural language processing tasks.
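A simplified sketch of the controllable-sparsity idea: given a sparsity rate r, zero out the lowest r-fraction of logits entirely and renormalize a softmax over the rest. The actual r-softmax construction in the paper differs in its details; this only illustrates the trade between mass concentration and sparsity.

```python
import numpy as np

def sparse_softmax(logits, r):
    # r in [0, 1): fraction of positions forced to exactly zero probability.
    logits = np.asarray(logits, dtype=float)
    n_zero = int(np.floor(r * len(logits)))     # how many positions to drop
    keep = np.argsort(logits)[n_zero:]          # indices of the kept logits
    out = np.zeros_like(logits)
    e = np.exp(logits[keep] - logits[keep].max())  # stable softmax over kept
    out[keep] = e / e.sum()
    return out

p = sparse_softmax([2.0, 1.0, 0.1, -1.0], r=0.5)
# exactly two positions stay positive and the output still sums to 1
```

Unlike plain softmax, the dropped positions receive exactly zero probability, which is the property that makes such functions attractive for multi-label prediction and sparse attention.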
Label Attention Network for sequential multi-label classification: you were looking at a wrong self-attention
Kovtun, Elizaveta, Boeva, Galina, Zabolotnyi, Artem, Burnaev, Evgeny, Spindler, Martin, Zaytsev, Alexey
Most of the available user information can be represented as a sequence of timestamped events. Each event is assigned a set of categorical labels whose future structure is of great interest. For instance, our goal is to predict the group of items in the next customer's purchase or tomorrow's client transactions. This is a multi-label classification problem for sequential data. Modern approaches focus on transformer architectures for sequential data, introducing self-attention over the elements in a sequence. In that case, we take into account events' time interactions but lose information on label inter-dependencies. Motivated by this shortcoming, we propose leveraging a self-attention mechanism over the labels preceding the predicted step. As our approach is a Label-Attention NETwork, we call it LANET. Experimental evidence suggests that LANET outperforms established models and better captures the interconnections between labels. For example, the micro-AUC of our approach is $0.9536$ compared to $0.7501$ for a vanilla transformer. We provide an implementation of LANET to facilitate its wider usage.
- North America > United States > Iowa (0.04)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Asia > Russia (0.04)
- (2 more...)
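The core operation, attention over label embeddings rather than over sequence positions, can be sketched with plain scaled dot-product attention. The embeddings, dimensions, and single-head formulation below are invented for illustration, not the LANET architecture itself:

```python
import numpy as np

def attention(Q, K, V):
    # Standard scaled dot-product attention with a row-wise softmax.
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
n_labels, d = 5, 8
E = rng.normal(size=(n_labels, d))                  # one embedding per label
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(E @ Wq, E @ Wk, E @ Wv)             # label-to-label attention
```

Because the attention matrix here is label-by-label rather than time-by-time, its weights directly model which labels co-inform each other, which is the inter-dependency signal a position-wise transformer discards.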
Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network and Self-Attention Mechanism
Li, Dichucheng, Che, Mingjin, Meng, Wenwu, Wu, Yulun, Yu, Yi, Xia, Fan, Li, Wei
Instrument playing technique (IPT) is a key element in enhancing the vividness of musical performance. As shown by the Guzheng numbered musical notation (a musical notation system widely used in China) in Fig.1, a complete automatic music transcription (AMT) system should contain IPT information in addition to pitch and onset information. IPT detection aims to classify the types of IPTs and locate the associated IPT boundaries in audio. IPT detection and modeling can be utilized in many applications of music information retrieval (MIR), like performance analysis [1] and AMT [2]. The research on IPT detection is still in its early stage. With the advancements in deep learning, deep neural networks have been increasingly used in more recent work [8, 9]. In [10], a convolutional recurrent neural network (CRNN) based model was proposed to classify IPTs in audio sequences concatenated by cello notes from 5 sound banks. To alleviate the computational redundancy caused by the sliding window in [10], Wang et al. [11] proposed a fully convolutional network (FCN) based end-to-end method to detect IPTs in segments concatenated by isolated Erhu notes. In [12], an additional onset detector was used, and its output was fused with IPT prediction in a post-processing step to improve the accuracy of IPT detection from monophonic audio sequences concatenated by …
- Media > Music (1.00)
- Leisure & Entertainment (1.00)